Valhalla's Things: I've been influenced
Tags: madeof:atoms
Review: Nettle & Bone, by T. Kingfisher

Publisher: Tor
Copyright: 2022
ISBN: 1-250-24403-X
Format: Kindle
Pages: 242
"We're a mystery religion," said the abbess, when she'd had a bit more wine than usual, "for people who have too much work to do to bother with mysteries. So we simply get along as best we can. Occasionally someone has a vision, but [the goddess] doesn't seem to want anything much, and so we try to return the favor."If you have read any other Kingfisher novels, much of this will be familiar: the speculative asides, the dogged determination, the slightly askew nature of the world, the vibes-based world-building that feels more like a fairy tale than a carefully constructed magic system, and the sense that the main characters (and nearly all of the supporting characters) are average people trying to play the hands they were dealt as ethically as they can. You will know that the tentative and woman-initiated romance is coming as soon as the party meets the paladin type who is almost always the romantic interest in one of these books. The emotional tone of the book is a bit predictable for regular readers, but Ursula Vernon's brain is such a delightful place to spend some time that I don't mind.
Marra had not managed to be pale and willowy and consumptive at any point in eighteen years of life and did not think she could achieve it before she died.

Nettle & Bone won the Hugo for Best Novel in 2023. I'm not sure why this specific T. Kingfisher novel won and not any of the half-dozen earlier novels she's written in a similar style, but sure, I have no objections. I'm glad one of them won; they're all worth reading, and hopefully that will help more people discover this delightful style of fantasy that doesn't feel like what anyone else is doing. Recommended, although be prepared for a few more horror touches than normal and a rather grim first chapter. Content warnings: domestic abuse. The dog... lives? Is equally as alive at the end of the book as it was at the end of the first chapter? The dog does not die; I'll just leave it at that. (Neither does the chicken.)

Rating: 8 out of 10
On the repositories we use a develop branch where changes are added or merged to test new versions, a temporary release/#.#.# branch to generate the release candidate versions and a main branch where the final versions are published.
semantic-release

It is a Node.js application designed to manage project versioning information on Git repositories using a Continuous Integration system (in this post we will use gitlab-ci). semantic-release uses semver for versioning (release versions use the format MAJOR.MINOR.PATCH) and commit messages are parsed to determine the next version number to publish.
If after analyzing the commits the version number has to be changed, the command updates the files we tell it to (i.e. the package.json file for nodejs projects and possibly a CHANGELOG.md file), creates a new commit with the changed files, creates a tag with the new version and pushes the changes to the repository.

When running on a CI/CD system we usually generate the artifacts related to a release (a package, a container image, etc.) from the tag, as it includes the right version number and usually has passed all the required tests (it is a good idea to run the tests again in any case, as someone could create a tag manually or we could run extra jobs when building the final assets; if those jobs fail it is not a big issue anyway, as version numbers are cheap and infinite, so we can skip releases if needed).

Commit messages are expected to follow the conventional commits format; among other things, a commit with a BREAKING CHANGE: footer or an exclamation mark (!) after the type/scope updates the MAJOR version.
The commit message format used must be:
<type>(optional scope): <description>
[optional body]
[optional footer(s)]
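As an illustration (these sample messages are mine, not from the original post), commits following that format look like this:

feat(ui): add a dark mode toggle     # updates the MINOR number
fix(api): handle empty responses     # updates the PATCH number
feat!: remove the legacy endpoints   # updates the MAJOR number
docs(readme): fix a typo             # produces no release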
semantic-release distinguishes three types of branches: release, maintenance and pre-release, but for now I'm not using maintenance ones.
The branches I use and their types are:

- main as release branch (final versions are published from there)
- develop as pre-release branch (used to publish development and testing versions with the format #.#.#-SNAPSHOT.#)
- release/#.#.# as pre-release branches (they are created from develop to publish release candidate versions with the format #.#.#-rc.# and once they are merged into main they are deleted)

On the release branch (main) the version number is updated as follows:
- The MAJOR number is incremented if a commit with a BREAKING CHANGE: footer or an exclamation mark (!) after the type/scope is found in the list of commits since the last version change (it looks for tags on the same branch).
- The MINOR number is incremented if the MAJOR number is not going to be changed and there is a commit with type feat in the commits found since the last version change.
- The PATCH number is incremented if neither the MAJOR nor the MINOR numbers are going to be changed and there is a commit with type fix in the commits found since the last version change.

On the pre-release branches (develop and release/#.#.#) the version and pre-release numbers are always calculated from the last published version available on the branch (i.e. if we published version 1.3.2 on main we need to have the commit with that tag on the develop or release/#.#.# branch to get right what the next version will be).
The version number is updated as follows:
- The MAJOR number is incremented if a commit with a BREAKING CHANGE: footer or an exclamation mark (!) after the type/scope is found in the list of commits since the last released version. In our example it was 1.3.2 and the version is updated to 2.0.0-SNAPSHOT.1 or 2.0.0-rc.1 depending on the branch.
- The MINOR number is incremented if the MAJOR number is not going to be changed and there is a commit with type feat in the commits found since the last released version. In our example the release was 1.3.2 and the version is updated to 1.4.0-SNAPSHOT.1 or 1.4.0-rc.1 depending on the branch.
- The PATCH number is incremented if neither the MAJOR nor the MINOR numbers are going to be changed and there is a commit with type fix in the commits found since the last version change. In our example the release was 1.3.2 and the version is updated to 1.3.3-SNAPSHOT.1 or 1.3.3-rc.1 depending on the branch.
- The pre-release number is incremented if the MAJOR, MINOR and PATCH numbers are not going to be changed but there is a commit that would otherwise update the version (i.e. a fix on 1.3.3-SNAPSHOT.1 will set the version to 1.3.3-SNAPSHOT.2, a fix or feat on 1.4.0-rc.1 will set the version to 1.4.0-rc.2, and so on).

Although semantic-release is designed for nodejs projects, it can be used with multiple programming languages and project types.
For nodejs projects the usual place to put the configuration is the project's package.json, but I prefer to use a .releaserc file instead.
As I use a common set of CI templates, instead of using a .releaserc on each project I generate it on the fly on the jobs that need it, replacing values related to the project type and the current branch on a template using the tmpl command (lately I use a branch of my own fork while I wait for some feedback from upstream, as you will see on the Dockerfile).

On the gitlab-ci job we use the image built from the following Dockerfile:
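The Dockerfile contents did not survive in this copy of the post. As a rough sketch (the package list and the way tmpl is obtained are my assumptions, not the original), an image with git, jq, semantic-release, its plugins and the tmpl binary could be built like this:

FROM node:lts-alpine
# git is used by semantic-release and jq formats the generated JSON
RUN apk add --no-cache git jq
# Install semantic-release and the plugins referenced on the template
RUN npm install -g semantic-release \
    @semantic-release/changelog \
    @semantic-release/git \
    @semantic-release/gitlab \
    conventional-changelog-conventionalcommits \
    semantic-release-replace-plugin
# Hypothetical: copy a statically linked tmpl binary built elsewhere
# (the post mentions using a branch of the author's own fork)
COPY tmpl /usr/local/bin/tmpl
CMD ["/bin/sh"]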
The semantic-release command is executed when new commits are added to the develop, release/#.#.# or main branches (basically when something is merged or pushed) and after all tests have passed (we don't want to create a new version that does not compile or does not pass at least the unit tests). The job is something like the following:
semantic_release:
  image: $SEMANTIC_RELEASE_IMAGE
  rules:
    - if: '$CI_COMMIT_BRANCH =~ /^(develop|main|release\/\d+\.\d+\.\d+)$/'
      when: always
  stage: release
  before_script:
    - echo "Loading scripts.sh"
    - . $ASSETS_DIR/scripts.sh
  script:
    - sr_gen_releaserc_json
    - git_push_setup
    - semantic-release
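As a side note (my addition, not from the original post): when debugging a job like this one it can be useful to run semantic-release in dry-run mode, which performs the commit analysis and prints the release notes without pushing commits, tags or releases:

semantic-release --dry-run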
The SEMANTIC_RELEASE_IMAGE variable contains the URI of the image built using the Dockerfile above, and sr_gen_releaserc_json and git_push_setup are functions defined on the $ASSETS_DIR/scripts.sh file:

- The sr_gen_releaserc_json function generates the .releaserc.json file using the tmpl command.
- The git_push_setup function configures git to allow pushing changes to the repository with the semantic-release command, optionally signing them with an SSH key.

The sr_gen_releaserc_json function

The code for the sr_gen_releaserc_json function is the following:
sr_gen_releaserc_json() {
  # Use nodejs as default project_type
  project_type="${PROJECT_TYPE:-nodejs}"
  # REGEX to match the rc_branch name
  rc_branch_regex='^release\/[0-9]\+\.[0-9]\+\.[0-9]\+$'
  # PATHS on the local ASSETS_DIR
  assets_dir="${CI_PROJECT_DIR}/${ASSETS_DIR}"
  sr_local_plugin="${assets_dir}/local-plugin.cjs"
  releaserc_tmpl="${assets_dir}/releaserc.json.tmpl"
  pipeline_runtime_values_yaml="/tmp/releaserc_values.yaml"
  pipeline_values_yaml="${assets_dir}/values_${project_type}_project.yaml"
  # Destination PATH
  releaserc_json=".releaserc.json"
  # Create an empty pipeline_values_yaml if missing
  test -f "$pipeline_values_yaml" || : >"$pipeline_values_yaml"
  # Create the pipeline_runtime_values_yaml file
  echo "branch: ${CI_COMMIT_BRANCH}" >"$pipeline_runtime_values_yaml"
  echo "gitlab_url: ${CI_SERVER_URL}" >>"$pipeline_runtime_values_yaml"
  # Add the rc_branch name if we are on an rc_branch
  if [ "$(echo "$CI_COMMIT_BRANCH" | sed -ne "/$rc_branch_regex/{p}")" ]; then
    echo "rc_branch: ${CI_COMMIT_BRANCH}" >>"$pipeline_runtime_values_yaml"
  elif [ "$(echo "$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME" |
      sed -ne "/$rc_branch_regex/{p}")" ]; then
    echo "rc_branch: ${CI_MERGE_REQUEST_SOURCE_BRANCH_NAME}" \
      >>"$pipeline_runtime_values_yaml"
  fi
  echo "sr_local_plugin: ${sr_local_plugin}" >>"$pipeline_runtime_values_yaml"
  # Create the releaserc_json file
  tmpl -f "$pipeline_runtime_values_yaml" -f "$pipeline_values_yaml" \
    "$releaserc_tmpl" | jq . >"$releaserc_json"
  # Remove the pipeline_runtime_values_yaml file
  rm -f "$pipeline_runtime_values_yaml"
  # Print the releaserc_json file
  print_file_collapsed "$releaserc_json"
  # --*-- BEG: NOTE --*--
  # Rename the package.json to ignore it when calling semantic release.
  # The idea is that the local-plugin renames it back on the first step of the
  # semantic-release process.
  # --*-- END: NOTE --*--
  if [ -f "package.json" ]; then
    echo "Renaming 'package.json' to 'package.json_disabled'"
    mv "package.json" "package.json_disabled"
  fi
}
The variables used in the function are defined by gitlab except ASSETS_DIR and PROJECT_TYPE; in the complete pipelines the ASSETS_DIR is defined on a common file included by all the pipelines and the project type is defined on the .gitlab-ci.yml file of each project.
If you review the code you will see that the file processed by the tmpl command is named releaserc.json.tmpl; its contents are shown here:
"plugins": [
- if .sr_local_plugin
" .sr_local_plugin ",
- end
[
"@semantic-release/commit-analyzer",
"preset": "conventionalcommits",
"releaseRules": [
"breaking": true, "release": "major" ,
"revert": true, "release": "patch" ,
"type": "feat", "release": "minor" ,
"type": "fix", "release": "patch" ,
"type": "perf", "release": "patch"
]
],
- if .replacements
[
"semantic-release-replace-plugin",
"replacements": .replacements toJson
],
- end
"@semantic-release/release-notes-generator",
- if eq .branch "main"
[
"@semantic-release/changelog",
"changelogFile": "CHANGELOG.md", "changelogTitle": "# Changelog"
],
- end
[
"@semantic-release/git",
"assets": if .assets .assets toJson else [] end ,
"message": "ci(release): v$ nextRelease.version \n\n$ nextRelease.notes "
],
[
"@semantic-release/gitlab",
"gitlabUrl": " .gitlab_url ", "successComment": false
]
],
"branches": [
"name": "develop", "prerelease": "SNAPSHOT" ,
- if .rc_branch
"name": " .rc_branch ", "prerelease": "rc" ,
- end
"main"
]
The values used to process the template are taken from the releaserc_values.yaml file generated on the fly, which includes the following keys and values:
- branch: the name of the current branch.
- gitlab_url: the URL of the gitlab server (the value is taken from the CI_SERVER_URL variable).
- rc_branch: the name of the current rc branch; we only set the value if we are processing one because semantic-release only allows one branch to match the rc prefix, and if we used a wildcard (i.e. release/*) but the users kept more than one release/#.#.# branch open at the same time the calls to semantic-release would fail for sure.
- sr_local_plugin: the path to the local plugin we use (shown later).

They are merged with the values_${project_type}_project.yaml file that includes settings specific to the project type; the one for nodejs is as follows:
replacements:
  - files:
      - "package.json"
    from: "\"version\": \".*\""
    to: "\"version\": \"${nextRelease.version}\""
assets:
  - "CHANGELOG.md"
  - "package.json"
The replacements section is used to update the version field on the relevant files of the project (in our case the package.json file) and the assets section includes the files that will be committed to the repository when the release is published (looking at the template you can see that the CHANGELOG.md is only updated for the main branch; we do it this way because updating the file on other branches creates a merge nightmare and we are only interested in it for released versions anyway).
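To make the result easier to picture, this is roughly the .releaserc.json that the template above produces on the develop branch of a nodejs project (the plugin path and gitlab URL are example values of mine):

{
  "plugins": [
    "/builds/group/project/assets/local-plugin.cjs",
    [
      "@semantic-release/commit-analyzer",
      {
        "preset": "conventionalcommits",
        "releaseRules": [
          { "breaking": true, "release": "major" },
          { "revert": true, "release": "patch" },
          { "type": "feat", "release": "minor" },
          { "type": "fix", "release": "patch" },
          { "type": "perf", "release": "patch" }
        ]
      }
    ],
    [
      "semantic-release-replace-plugin",
      {
        "replacements": [
          {
            "files": ["package.json"],
            "from": "\"version\": \".*\"",
            "to": "\"version\": \"${nextRelease.version}\""
          }
        ]
      }
    ],
    "@semantic-release/release-notes-generator",
    [
      "@semantic-release/git",
      {
        "assets": ["CHANGELOG.md", "package.json"],
        "message": "ci(release): v${nextRelease.version}\n\n${nextRelease.notes}"
      }
    ],
    [
      "@semantic-release/gitlab",
      { "gitlabUrl": "https://gitlab.com", "successComment": false }
    ]
  ],
  "branches": [
    { "name": "develop", "prerelease": "SNAPSHOT" },
    "main"
  ]
}

Note that the @semantic-release/changelog entry is missing, as it is only added when the branch is main.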
The local plugin adds code to rename the package.json_disabled file back to package.json if present and prints the last and next versions on the logs with a format that can be easily parsed using sed:
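The plugin code itself is not included in this copy of the post; a minimal sketch of a semantic-release plugin doing those two things could look like the following (my reconstruction, not the author's actual local-plugin.cjs):

// local-plugin.cjs -- hypothetical reconstruction, not the original code
const fs = require('node:fs');

// First plugin step of the semantic-release process: rename the file back
// so the rest of the plugins work against the real package.json
function verifyConditions(pluginConfig, context) {
  if (fs.existsSync('package.json_disabled')) {
    context.logger.log("Renaming 'package.json_disabled' to 'package.json'");
    fs.renameSync('package.json_disabled', 'package.json');
  }
}

// Print the last and next versions with a sed-friendly "key=value" format
function prepare(pluginConfig, { lastRelease, nextRelease, logger }) {
  logger.log(`last_version=${lastRelease.version}`);
  logger.log(`next_version=${nextRelease.version}`);
}

module.exports = { verifyConditions, prepare };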
The git_push_setup function

The code for the git_push_setup function is the following:
git_push_setup() {
  # Update global credentials to allow git clone & push for all the group repos
  git config --global credential.helper store
  cat >"$HOME/.git-credentials" <<EOF
https://fake-user:${GITLAB_REPOSITORY_TOKEN}@gitlab.com
EOF
  # Define user name, mail and signing key for semantic-release
  user_name="$SR_USER_NAME"
  user_email="$SR_USER_EMAIL"
  ssh_signing_key="$SSH_SIGNING_KEY"
  # Export git user variables
  export GIT_AUTHOR_NAME="$user_name"
  export GIT_AUTHOR_EMAIL="$user_email"
  export GIT_COMMITTER_NAME="$user_name"
  export GIT_COMMITTER_EMAIL="$user_email"
  # Sign commits with ssh if there is a SSH_SIGNING_KEY variable
  if [ "$ssh_signing_key" ]; then
    echo "Configuring GIT to sign commits with SSH"
    ssh_keyfile="/tmp/.ssh-id"
    : >"$ssh_keyfile"
    chmod 0400 "$ssh_keyfile"
    echo "$ssh_signing_key" | tr -d '\r' >"$ssh_keyfile"
    git config gpg.format ssh
    git config user.signingkey "$ssh_keyfile"
    git config commit.gpgsign true
  fi
}
The GITLAB_REPOSITORY_TOKEN variable (set on the CI/CD variables section of the project or group we want) contains a token with read_repository and write_repository permissions on all the projects on which we are going to use this function.
The SR_USER_NAME and SR_USER_EMAIL variables can be defined on a common file or on the CI/CD variables section of the project or group we want to work with, and the script assumes that the optional SSH_SIGNING_KEY is exported as a CI/CD default value of type variable (that is why the keyfile is created on the fly); git is configured to use it if the variable is not empty.
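As a usage note (my addition, not from the original post), the value for SSH_SIGNING_KEY can be generated with a standard ssh-keygen call; the private part goes into the CI/CD variable and the public part is added to the semantic-release user on gitlab as a signing key:

# Create an ed25519 key pair with an empty passphrase
ssh-keygen -t ed25519 -N "" -C "semantic-release" -f sr-signing-key
# sr-signing-key     -> contents of the SSH_SIGNING_KEY variable
# sr-signing-key.pub -> add to the gitlab user as a key of type 'Signing Key'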
Both GITLAB_REPOSITORY_TOKEN and SSH_SIGNING_KEY contain secrets, so it is probably a good idea to make them protected (if you do that you have to make the develop, main and release/* branches protected too).

As the semantic-release user has to be able to push to all the projects on those protected branches, it is a good idea to create a dedicated user and add it as a MAINTAINER for the projects we want (the MAINTAINERS need to be able to push to the branches), or, if you are using a GitLab with a Premium license, you can use the api to allow the semantic-release user to push to the protected branches without allowing it for any other user.

The semantic-release command

Once we have the .releaserc file and the git configuration ready we run the semantic-release command.
If the branch we are working with has one or more commits that will increment the version, the tool does the following (note that the steps described are the ones executed with the configuration we have generated):
- Updates the version on the files we have configured (i.e. the version field on the package.json file).
- Updates the CHANGELOG.md file adding the release notes if we are going to publish the file (when we are on the main branch).
- Creates a commit if any of the files listed on the assets key have changed and uses the commit message we have defined, replacing the variables for their current values.
- Creates a tag with the new version and pushes the changes to the repository.
- As we use the gitlab plugin, after tagging it also creates a release on the project with the tag name and the release notes.

git workflows and merges between branches

It is very important to remember that semantic-release looks at the commits of a given branch when calculating the next version to publish; that has two important implications:

- If we squash the commits of a merge request into a single commit, that is the only commit available when semantic-release needs to calculate the next version, and even if we use the right prefix for the squashed commit (fix, feat, ...) we miss all the messages that would otherwise go to the CHANGELOG.md file.
- We have to merge the main branch changes into the develop one after each release tag is created; in my pipelines the first job that processes a release tag creates a branch from the tag and an MR to merge it to develop.

The important thing about that MR is that it must not be squashed; if we do that the tag commit will probably be lost, so we need to be careful.
To merge the changes directly we can run the following code:
# Set the SR_TAG variable to the tag you want to process
SR_TAG="v1.3.2"
# Fetch all the changes
git fetch --all --prune
# Switch to the main branch
git switch main
# Pull all the changes
git pull
# Switch to the development branch
git switch develop
# Pull all the changes
git pull
# Create followup branch from tag
git switch -c "followup/$SR_TAG" "$SR_TAG"
# Change files manually & commit the changed files
git commit -a --untracked-files=no -m "ci(followup): $SR_TAG to develop"
# Switch to the development branch
git switch develop
# Merge the followup branch into the development one using the --no-ff option
git merge --no-ff "followup/$SR_TAG"
# Remove the followup branch
git branch -d "followup/$SR_TAG"
# Push the changes
git push
If we want to review the changes before merging them into develop we can create an MR pushing the followup branch after committing the changes, but we have to make sure that we don't squash the commits when merging or it will not work as we want.
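For example (my addition, using GitLab push options), the followup branch can be pushed and the MR created in a single step; remember to disable squashing before merging it:

git push -u origin "followup/$SR_TAG" \
  -o merge_request.create -o merge_request.target=develop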
We also have the color_encoding and color_range properties for the input colorspace conversion, but that is not covered by this work.
In case you're not familiar with AMD shared code, what we need to do is basically draw a map and navigate there! We have some DRM color properties after blending, but nothing before blending yet. Much of the hardware programming was already implemented in the AMD DC layer, thanks to the shared code, but both the DRM interface and its connection to the shared code were still missing. That's when the search begins!
AMD driver-specific color pipeline:

Looking at the color capabilities of the hardware, we arrive at this initial set of properties. The path wasn't exactly like that; we had many iterations and discoveries until we reached this pipeline.
The Plane Degamma is our first driver-specific property before blending. It's used to linearize the color space from encoded values to light-linear values. We can use a pre-defined transfer function or a user lookup table (in short, LUT) to linearize the color space.

Pre-defined transfer functions for plane degamma are hardcoded curves that go to a specific hardware block called DPP Degamma ROM. It supports the following transfer functions: sRGB EOTF, BT.709 inverse OETF, PQ EOTF, and the pure power curves Gamma 2.2, Gamma 2.4 and Gamma 2.6.

We also have a one-dimensional LUT. This 1D LUT has four thousand ninety-six (4096) entries, the usual 1D LUT size in DRM/KMS. It's an array of drm_color_lut entries that goes to the DPP Gamma Correction block.
We also now have a color transformation matrix (CTM) for color space conversion. It's a 3x4 matrix of fixed-point values that goes to the DPP Gamut Remap block. Both pre- and post-blending matrices previously went to the same color block; we worked on detaching them to clear both paths, and now each CTM goes its own way.

Next, the HDR Multiplier. The HDR Multiplier is a factor applied to the color values of an image to increase their overall brightness. This is useful for converting images from a standard dynamic range (SDR) to a high dynamic range (HDR). As it can range beyond [0.0, 1.0], subsequent transforms need to use the PQ (HDR) transfer functions.
And we need a 3D LUT. But a 3D LUT has a limited number of entries in each dimension, so we want to use it in a colorspace that is optimized for human vision, which means a non-linear space. To deliver that, userspace may need one 1D LUT before the 3D LUT to delinearize content and another one after it to linearize content again for blending.

The pre-3D-LUT curve is called the Shaper curve. Unlike the Degamma TF, there are no hardcoded curves for the shaper TF, but we can use the AMD color module in the driver to build shaper curves from pre-defined coefficients. The color module combines the TF and the user LUT values into the LUT that goes to the DPP Shaper RAM block.
Finally, our rockstar: the 3D LUT. A 3D LUT is perfect for complex color transformations and adjustments between color channels. A 3D LUT is also more complex to manage and requires more computational resources; as a consequence, its number of entries is usually limited. To overcome this restriction, the array contains samples from the approximated function, and values between samples are estimated by tetrahedral interpolation. AMD supports 17 and 9 as the size of a single dimension. Blue is the outermost dimension, red the innermost.
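As an illustration of that layout (a sketch of mine, not code from the driver), the flattened index of the (r, g, b) sample in such a LUT can be computed like this:

#include <stddef.h>

/* Index of the (r, g, b) sample in a dim x dim x dim 3D LUT laid out
 * with blue as the outermost dimension and red as the innermost. */
static inline size_t lut3d_index(size_t r, size_t g, size_t b, size_t dim)
{
    return (b * dim + g) * dim + r;
}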
As mentioned, we need a post-3D-LUT curve to linearize the color space before blending. This is done by the Blend TF and LUT. Similar to the shaper TF, there are no hardcoded curves for the Blend TF; the pre-defined curves are the same as in the Degamma block, but calculated by the color module. The resulting LUT goes to the DPP Blend RAM block.

Now we have everything connected before blending. As a conflict between plane and CRTC Degamma was inevitable, our approach doesn't accept both being set at the same time.
We also optimized the conversion of the framebuffer to wire encoding by adding support for pre-defined CRTC Gamma TF. Again, there are no hardcoded curves; the TF and LUT are combined by the AMD color module. The same types of shaper curves are supported. The resulting LUT goes to the MPC Gamma RAM block.

Finally, we arrive at the final version of the DRM/AMD driver-specific color management pipeline. With this knowledge, you're ready to better enjoy the rainbow treasure of AMD display hardware and the world of graphics computing.
With this work, Gamescope/Steam Deck embraces the color capabilities of the AMD
GPU. We highlight here how we map the Gamescope color pipeline to each AMD
color block.
Future work:
The search for the rainbow treasure is not over! The Linux DRM subsystem
contains many hidden treasures from different vendors. We want more complex
color transformations and adjustments available on Linux. We also want to
expose all GPU color capabilities from all hardware vendors to the Linux
userspace.
Thanks Joshua and Harry for this joint work and the Linux DRI community for all feedback and reviews.
The amazing part of this work comes in the next talk with Joshua and The Rainbow Frogs!
Any questions?
Add a data partition of size 1G to the Debian installer ISO using an ext4 partition
# mk-data-partition -s 1G debian-12.2.0-amd64-netinst.iso
Create the data partition using an exFAT file system on the USB stick /dev/sdb. First copy (or dd) the ISO onto the USB stick, then add the data partition to the USB stick.
# cp faicd64-large_6.0.3.iso /dev/sdb
# mk-data-partition -F /dev/sdb
Create the data partition and copy directories A and B to it
# mk-data-partition -c debian-12.2.0-amd64-netinst.iso A B
The next FAI version will use this in different parts of an
installation. A blog post about this will follow.
A new idea for our Debian installer ISO

Here are my ideas for how the Debian installer could use such a partition if it automatically detects and mounts it (by its file system label):
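For instance (a sketch of mine with a made-up label, since this copy of the post does not name the actual one), the installer could detect and mount the partition like this:

# Show the label mk-data-partition assigned to the new partition
lsblk -o NAME,LABEL /dev/sdb
# Mount it by label (MY-DATA is a placeholder, not the real label)
mount LABEL=MY-DATA /mnt/data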
There are various ways in which the installation could be done; in our setup, these are the prerequisites. A compose.yml file is present in Nextcloud AIO's git repo here. Taking that file as a reference, we have our own compose.yml:

services:
  nextcloud-aio-mastercontainer:
    image: nextcloud/all-in-one:latest
    init: true
    restart: always
    container_name: nextcloud-aio-mastercontainer # This line is not allowed to be changed as otherwise AIO will not work correctly
    volumes:
      - nextcloud_aio_mastercontainer:/mnt/docker-aio-config # This line is not allowed to be changed as otherwise the built-in backup solution will not work
      - /var/run/docker.sock:/var/run/docker.sock:ro # May be changed on macOS, Windows or docker rootless. See the applicable documentation. If adjusting, don't forget to also set 'WATCHTOWER_DOCKER_SOCKET_PATH'!
    ports:
      - 8080:8080
    environment: # Is needed when using any of the options below
      # - AIO_DISABLE_BACKUP_SECTION=false # Setting this to true allows to hide the backup section in the AIO interface. See https://github.com/nextcloud/all-in-one#how-to-disable-the-backup-section
      - APACHE_PORT=32323 # Is needed when running behind a web server or reverse proxy (like Apache, Nginx, Cloudflare Tunnel and else). See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
      - APACHE_IP_BINDING=127.0.0.1 # Should be set when running behind a web server or reverse proxy (like Apache, Nginx, Cloudflare Tunnel and else) that is running on the same host. See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
      # - BORG_RETENTION_POLICY=--keep-within=7d --keep-weekly=4 --keep-monthly=6 # Allows to adjust borgs retention policy. See https://github.com/nextcloud/all-in-one#how-to-adjust-borgs-retention-policy
      # - COLLABORA_SECCOMP_DISABLED=false # Setting this to true allows to disable Collabora's Seccomp feature. See https://github.com/nextcloud/all-in-one#how-to-disable-collaboras-seccomp-feature
      - NEXTCLOUD_DATADIR=/opt/docker/cloud.raju.dev/nextcloud # Allows to set the host directory for Nextcloud's datadir. Warning: do not set or adjust this value after the initial Nextcloud installation is done! See https://github.com/nextcloud/all-in-one#how-to-change-the-default-location-of-nextclouds-datadir
      # - NEXTCLOUD_MOUNT=/mnt/ # Allows the Nextcloud container to access the chosen directory on the host. See https://github.com/nextcloud/all-in-one#how-to-allow-the-nextcloud-container-to-access-directories-on-the-host
      # - NEXTCLOUD_UPLOAD_LIMIT=10G # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-upload-limit-for-nextcloud
      # - NEXTCLOUD_MAX_TIME=3600 # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-max-execution-time-for-nextcloud
      # - NEXTCLOUD_MEMORY_LIMIT=512M # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-php-memory-limit-for-nextcloud
      # - NEXTCLOUD_TRUSTED_CACERTS_DIR=/path/to/my/cacerts # CA certificates in this directory will be trusted by the OS of the nextcloud container (Useful e.g. for LDAPS) See https://github.com/nextcloud/all-in-one#how-to-trust-user-defined-certification-authorities-ca
      # - NEXTCLOUD_STARTUP_APPS=deck twofactor_totp tasks calendar contacts notes # Allows to modify the Nextcloud apps that are installed on starting AIO the first time. See https://github.com/nextcloud/all-in-one#how-to-change-the-nextcloud-apps-that-are-installed-on-the-first-startup
      # - NEXTCLOUD_ADDITIONAL_APKS=imagemagick # This allows to add additional packages to the Nextcloud container permanently. Default is imagemagick but can be overwritten by modifying this value. See https://github.com/nextcloud/all-in-one#how-to-add-os-packages-permanently-to-the-nextcloud-container
      # - NEXTCLOUD_ADDITIONAL_PHP_EXTENSIONS=imagick # This allows to add additional php extensions to the Nextcloud container permanently. Default is imagick but can be overwritten by modifying this value. See https://github.com/nextcloud/all-in-one#how-to-add-php-extensions-permanently-to-the-nextcloud-container
      # - NEXTCLOUD_ENABLE_DRI_DEVICE=true # This allows to enable the /dev/dri device in the Nextcloud container. Warning: this only works if the '/dev/dri' device is present on the host! If it should not exist on your host, don't set this to true as otherwise the Nextcloud container will fail to start! See https://github.com/nextcloud/all-in-one#how-to-enable-hardware-transcoding-for-nextcloud
      # - NEXTCLOUD_KEEP_DISABLED_APPS=false # Setting this to true will keep Nextcloud apps that are disabled in the AIO interface and not uninstall them if they should be installed. See https://github.com/nextcloud/all-in-one#how-to-keep-disabled-apps
      # - TALK_PORT=3478 # This allows to adjust the port that the talk container is using. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-talk-port
      # - WATCHTOWER_DOCKER_SOCKET_PATH=/var/run/docker.sock # Needs to be specified if the docker socket on the host is not located in the default '/var/run/docker.sock'. Otherwise mastercontainer updates will fail. For macos it needs to be '/var/run/docker.sock'
      # - SKIP_DOMAIN_VALIDATION=true
    # networks: # Is needed when you want to create the nextcloud-aio network with ipv6-support using this file, see the network config at the bottom of the file
    #   - nextcloud-aio # Is needed when you want to create the nextcloud-aio network with ipv6-support using this file, see the network config at the bottom of the file
    # # Uncomment the following line when using SELinux
    # security_opt: ["label:disable"]

volumes: # If you want to store the data on a different drive, see https://github.com/nextcloud/all-in-one#how-to-store-the-filesinstallation-on-a-separate-drive
  nextcloud_aio_mastercontainer:
    name: nextcloud_aio_mastercontainer # This line is not allowed to be changed as otherwise the built-in backup solution will not work
I have not removed many of the commented options in the compose file, in case I want to use them in the future. If you want a smaller, cleaner compose file without the extra options, you can refer to this one:

services:
  nextcloud-aio-mastercontainer:
    image: nextcloud/all-in-one:latest
    init: true
    restart: always
    container_name: nextcloud-aio-mastercontainer
    volumes:
      - nextcloud_aio_mastercontainer:/mnt/docker-aio-config
      - /var/run/docker.sock:/var/run/docker.sock:ro
    ports:
      - 8080:8080
    environment:
      - APACHE_PORT=32323
      - APACHE_IP_BINDING=127.0.0.1
      - NEXTCLOUD_DATADIR=/opt/docker/nextcloud

volumes:
  nextcloud_aio_mastercontainer:
    name: nextcloud_aio_mastercontainer
I am using a separate directory to store Nextcloud data. As per the Nextcloud documentation you should be using a separate partition if you want to use this feature; however, I did not have that option on my server, so I used a separate directory instead.

We also use a custom port on which Nextcloud listens for operations; we have set it to 32323 above, but you can use any port in the permissible port range. The 8080 port is used to set up the AIO management interface. Neither 8080 nor the APACHE_PORT need to be open on the host machine, as we will be using a reverse proxy setup with nginx to direct requests. Once you have your preferred compose.yml file, you can start the containers using:

$ docker-compose -f compose.yml up -d
Creating network "clouddev_default" with the default driver
Creating volume "nextcloud_aio_mastercontainer" with default driver
Creating nextcloud-aio-mastercontainer ... done
Once your containers are running, we can do the nginx setup.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;
    #listen [::]:80;            # comment to disable IPv6

    if ($scheme = "http") {
        return 301 https://$host$request_uri;
    }

    listen 443 ssl http2;       # for nginx versions below v1.25.1
    #listen [::]:443 ssl http2; # for nginx versions below v1.25.1 - comment to disable IPv6

    # listen 443 ssl;      # for nginx v1.25.1+
    # listen [::]:443 ssl; # for nginx v1.25.1+ - keep comment to disable IPv6

    # http2 on;                                 # uncomment to enable HTTP/2 - supported on nginx v1.25.1+
    # http3 on;                                 # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+
    # quic_retry on;                            # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+
    # add_header Alt-Svc 'h3=":443"; ma=86400'; # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+
    # listen 443 quic reuseport;                # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+ - please remove "reuseport" if there is already another quic listener on port 443 with enabled reuseport
    # listen [::]:443 quic reuseport;           # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+ - please remove "reuseport" if there is already another quic listener on port 443 with enabled reuseport - keep comment to disable IPv6

    server_name cloud.example.com;

    location / {
        proxy_pass http://127.0.0.1:32323$request_uri;

        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Scheme $scheme;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Accept-Encoding "";
        proxy_set_header Host $host;

        client_body_buffer_size 512k;
        proxy_read_timeout 86400s;
        client_max_body_size 0;

        # Websocket
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }

    ssl_certificate /etc/letsencrypt/live/cloud.example.com/fullchain.pem;   # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/cloud.example.com/privkey.pem; # managed by Certbot

    ssl_session_timeout 1d;
    ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
    ssl_session_tickets off;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305;
    ssl_prefer_server_ciphers on;

    # Optional settings:

    # OCSP stapling
    # ssl_stapling on;
    # ssl_stapling_verify on;
    # ssl_trusted_certificate /etc/letsencrypt/live/<your-nc-domain>/chain.pem;

    # replace with the IP address of your resolver
    # resolver 127.0.0.1; # needed for oscp stapling: e.g. use 94.140.15.15 for adguard / 1.1.1.1 for cloudflared or 8.8.8.8 for google - you can use the same nameserver as listed in your /etc/resolv.conf file
}
Please note that you need valid SSL certificates for your domain for this configuration to work. Steps for getting valid SSL certificates are beyond the scope of this article; a web search on getting SSL certificates with letsencrypt will turn up several resources, or I may write a separate blog post on it in the future. Once your nginx configuration is done, you can test it using:

$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

and then reload nginx with:

$ sudo nginx -s reload
The AIO management interface is now reachable at domain.tld:8080; however, we do not want to open the 8080 port publicly to access it, so to complete the setup here is a neat hack from sahilister:

ssh -L 8080:127.0.0.1:8080 username@<server-ip>

This binds the 8080 port of your server to port 8080 of your localhost using port forwarding over SSH. The port forwarding only lasts for the duration of your SSH session; if the SSH session breaks, your port forwarding will too. Once you have the port forwarded, you can open the Nextcloud AIO instance in your web browser at 127.0.0.1:8080.
You will get an error here because you are trying to access a page on localhost over HTTPS. You can click on advanced and then continue to proceed to the next page; your data is encrypted over SSH for this session as we are binding the port over SSH. Depending on your choice of browser, the page might look different.

Once you have proceeded, the Nextcloud AIO interface will open and will look something like this. It will show an auto-generated passphrase; you need to save this passphrase and make sure not to lose it. For the purposes of security, I have masked the passwords with capsicums.

Once you have noted down your password, you can proceed to the Nextcloud AIO login, enter your password and then log in. After login you will be greeted with a screen like this. Now you can put the domain that you want to use in the Submit domain field. Once the domain check is done, you will proceed to the next step and see another screen.

Here you can select any optional containers for the features that you might want. IMPORTANT: Please make sure to also change the time zone at the bottom of the page according to the time zone you wish to operate in. The timezone setup is also important because the database will get initialized according to the set time zone; a wrong setting can result in wrong initialization of the database and you ending up in a startup loop for Nextcloud. I faced this issue and could only resolve it after getting help from sahilister. Once you are done changing the timezone and selecting any additional features you want, you can click on Download and start the containers.
button or go to your configured domain to access the login screen. You can use the login details from the previous step to login to the administrator account of your Nextcloud instance. There you have it, your very own cloud!docker-compose -f compose.yml down -v
The above command will also remove the volume associated with the master containerdocker stop nextcloud-aio-apache nextcloud-aio-notify-push nextcloud-aio-nextcloud nextcloud-aio-imaginary nextcloud-aio-fulltextsearch nextcloud-aio-redis nextcloud-aio-database nextcloud-aio-talk nextcloud-aio-collabora
docker rm nextcloud-aio-apache nextcloud-aio-notify-push nextcloud-aio-nextcloud nextcloud-aio-imaginary nextcloud-aio-fulltextsearch nextcloud-aio-redis nextcloud-aio-database nextcloud-aio-talk nextcloud-aio-collabora
docker rmi $(docker images --filter "reference=nextcloud/*" -q)
docker volume rm <volume-name>
docker network rm nextcloud-aio
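To find the volume names for the docker volume rm command (my addition, not from the original post), the volumes created by AIO can be listed with a filter:

docker volume ls --filter name=nextcloud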
This update was prompted by compiler warnings under -Wformat -Wformat-security. These are frequently pretty simple changes, as it was here: all it took was a call to compileAttributes() from an updated Rcpp version, which now injects "%s" as a format string when calling Rf_error().
The (very short) NEWS entry for this release follows.
Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Changes in version 0.1.11 (2023-11-28)
RcppExports.cpp has been regenerated under an updated Rcpp to address a format string warning.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
The ones with -book in the name have been imposed on A4 paper for a 16 page signature. All of the fonts have been converted to paths, for ease of printing (yes, this means that customizing the font requires running the script, sorry).
A few planners in English:
The --book files are imposed for a 3 sheet (12 page) signature.
Next.